600 research outputs found

    mRNP transport starts in the nucleus

    Get PDF

    Automatically learning patterns for self-admitted technical debt removal

    Get PDF
    Technical Debt (TD) expresses the need for improvements in a software system, e.g., to its source code or architecture. In certain circumstances, developers “self-admit” technical debt (SATD) in their source code comments. Previous studies investigate when SATD is admitted and what changes developers perform to remove it. Building on these studies, we present a first step towards the automated recommendation of SATD removal strategies. By leveraging a curated dataset of SATD removal patterns, we build a multi-level classifier capable of recommending six SATD removal strategies: changing API calls, conditionals, method signatures, exception handling, or return statements, or signaling that a more complex change is needed. SARDELE (SAtd Removal using DEep LEarning) combines a convolutional neural network trained on embeddings extracted from the SATD comments with a recurrent neural network trained on embeddings extracted from the SATD-affected source code. Our evaluation reveals that SARDELE predicts the type of change to be applied with an average precision of ~55%, recall of ~57%, and AUC of 0.73, reaching up to 73% precision, 63% recall, and 0.74 AUC for certain categories, such as changes to method calls. Overall, the results suggest that SATD removal follows recurrent patterns and indicate the feasibility of supporting developers in this task with automated recommenders.
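
    As a rough illustration of the two-branch architecture this abstract describes (a CNN over comment embeddings fused with an RNN over code embeddings), here is a minimal PyTorch sketch; all layer sizes and names are placeholder assumptions, not SARDELE's published configuration.

```python
import torch
import torch.nn as nn

class TwoBranchSatdModel(nn.Module):
    """Illustrative two-branch classifier: a 1-D CNN over SATD-comment
    token embeddings and a GRU over source-code token embeddings, merged
    into a six-way prediction. Hyperparameters are placeholders."""

    def __init__(self, comment_vocab, code_vocab, dim=128, classes=6):
        super().__init__()
        self.c_embed = nn.Embedding(comment_vocab, dim)
        self.k_embed = nn.Embedding(code_vocab, dim)
        self.conv = nn.Conv1d(dim, 64, kernel_size=3, padding=1)
        self.gru = nn.GRU(dim, 64, batch_first=True)
        self.head = nn.Linear(64 + 64, classes)

    def forward(self, comment_ids, code_ids):
        c = self.c_embed(comment_ids).transpose(1, 2)      # (B, dim, Lc)
        c = torch.relu(self.conv(c)).max(dim=2).values     # (B, 64)
        _, h = self.gru(self.k_embed(code_ids))            # h: (1, B, 64)
        return self.head(torch.cat([c, h.squeeze(0)], 1))  # (B, classes)
```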

    How Clones are Maintained: An Empirical Study

    Full text link
    Despite the conventional wisdom concerning the risks of source code cloning as a software development strategy, several studies in the literature indicate that these risks do not materialize in practice. In most cases clones are properly maintained and, when this does not happen, it is because the cloned code evolves independently. Stemming from previous work, this paper combines clone detection and co-change analysis to investigate how clones are maintained when an evolution activity or a bug fix impacts a source code fragment belonging to a clone class. The two case studies reported here confirm that, both for bug fixing and for evolution purposes, most of the cloned code is consistently maintained during the same co-change or during temporally close co-changes.
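
    The co-change check the abstract describes can be sketched as follows; this is a minimal Python approximation assuming a hypothetical commit structure (a `.timestamp` and a set `.touched` of fragment identifiers), not the paper's actual tooling.

```python
from datetime import timedelta

def consistently_co_changed(clone_class, commits, window_days=7):
    """Sketch: a clone class is consistently maintained if, whenever a
    commit touches some of its fragments, the left-out siblings are
    updated in the same commit or a temporally close one."""
    window = timedelta(days=window_days)
    for c in commits:
        touched = clone_class & c.touched
        if not touched or touched == clone_class:
            continue  # class untouched, or changed consistently at once
        for frag in clone_class - c.touched:
            # Is the left-out sibling updated in a nearby commit?
            near = any(frag in o.touched and
                       abs(o.timestamp - c.timestamp) <= window
                       for o in commits)
            if not near:
                return False  # fragment left behind: inconsistent change
    return True
```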

    The Latin Leaflet

    Get PDF
    In the present work, we apply the asymptotic homogenization technique to the equations describing the dynamics of a heterogeneous material with evolving micro-structure, thereby obtaining a set of upscaled, effective equations. We consider the case in which the heterogeneous body comprises two hyperelastic materials, and we assume that the evolution of their micro-structure occurs through the development of plastic-like distortions, the latter being accounted for by means of the Bilby–Kröner–Lee (BKL) decomposition. The asymptotic homogenization approach is applied simultaneously to the linear momentum balance law of the body and to the evolution law for the plastic-like distortions. This evolution law models a stress-driven production of inelastic distortions and stems from phenomenological observations of cellular aggregates. The whole study is also framed within the limit of small elastic distortions, and it provides a robust framework that can be readily generalized to growth and remodeling of nonlinear composites. Finally, we complete our theoretical model by performing numerical simulations.
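
    For readers unfamiliar with the two ingredients named in the abstract, the standard forms are compact (notation is the conventional one, not necessarily the paper's):

```latex
% Bilby–Kröner–Lee (BKL) multiplicative decomposition of the deformation
% gradient into an elastic part and a plastic-like distortion:
F = F_{\mathrm{e}}\,F_{\mathrm{p}}

% Two-scale asymptotic ansatz underlying homogenization, with slow
% variable x, fast variable y = x/\varepsilon, and scale ratio \varepsilon:
u^{\varepsilon}(x) = u_{0}(x,y) + \varepsilon\,u_{1}(x,y)
  + \varepsilon^{2}u_{2}(x,y) + \cdots
```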

    Characterizing the evolution of statically-detectable performance issues of Android apps

    Get PDF
    Mobile apps play a major role in our everyday life, and they tend to become more and more complex and resource-demanding. Because of that, performance issues may occur, disrupting the user experience or, even worse, preventing effective use of the app. Ultimately, such problems can cause bad reviews and affect the app's success. Developers deal with performance issues through dynamic analysis, i.e., performance testing and profiling tools, although static analysis tools can be a valid, relatively inexpensive complement for the early detection of some such issues. This paper empirically investigates how potential performance issues identified by a popular static analysis tool, Android Lint, are actually resolved in 316 open source Android apps among the 724 apps we analyzed. More specifically, the study traces the issues detected by Android Lint from their introduction until their resolution, with the aim of studying (i) the overall evolution of performance issues in apps, (ii) the proportion of issues being resolved, (iii) the distribution of their survival time, and (iv) the extent to which issue resolutions are documented by developers in commit messages. Results indicate that some issues, especially those related to the lack of resource recycling, tend to be more frequent than others. Also, while some issues, primarily of an algorithmic nature, tend to be resolved quickly through well-known patterns, others tend to stay in the app longer, or are not resolved at all. Finally, we found that only 10% of issue resolutions are documented in commit messages.
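
    Survival time of a lint issue across a commit history can be computed with a simple first-seen/last-seen pass; the sketch below is a hypothetical Python rendering of that idea (the `history` structure mapping commits to reported issue ids is an assumption, not the paper's pipeline).

```python
def issue_survival(history):
    """`history` maps commit id -> set of issue ids Android Lint reports
    at that commit, in chronological order. Returns, per issue, its
    survival span in commits and whether it was resolved before the
    last analyzed snapshot."""
    first_seen, last_seen = {}, {}
    for idx, (commit, issues) in enumerate(history.items()):
        for issue in issues:
            first_seen.setdefault(issue, idx)
            last_seen[issue] = idx
    total = len(history)
    survival = {}
    for issue, start in first_seen.items():
        end = last_seen[issue]
        resolved = end < total - 1  # vanished before the last snapshot
        survival[issue] = (end - start + 1, resolved)
    return survival
```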

    Towards Automatically Addressing Self-Admitted Technical Debt: How Far Are We?

    Full text link
    Upon evolving their software, organizations and individual developers have to spend substantial effort to pay back technical debt, i.e., the fact that software is released in a shape not as good as it should be, e.g., in terms of functionality, reliability, or maintainability. This paper empirically investigates the extent to which technical debt can be automatically paid back by neural-based generative models, and in particular by models exploiting different strategies for pre-training and fine-tuning. We start by extracting a dataset of 5,039 Self-Admitted Technical Debt (SATD) removals from 595 open-source projects. SATD refers to technical debt instances documented (e.g., via code comments) by developers. We use this dataset to experiment with seven different generative deep learning (DL) model configurations. Specifically, we compare transformers pre-trained and fine-tuned with different combinations of training objectives, including the fixing of generic code changes, SATD removals, and SATD-comment prompt tuning. Also, we investigate the applicability in this context of a recently available Large Language Model (LLM)-based chatbot. Results of our study indicate that the automated repayment of SATD is a challenging task: the best model we experimented with is able to automatically fix ~2% to 8% of test instances, depending on the number of attempts it is allowed to make. Given the limited size of the fine-tuning dataset (~5k instances), the model's pre-training plays a fundamental role in boosting performance. Also, the ability to remove SATD drops steadily if the comment documenting the SATD is not provided as input to the model. Finally, we found general-purpose LLMs not to be a competitive approach for addressing SATD.
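
    To make the setup concrete, here is a minimal sketch of generating multiple SATD-fix candidates with a pre-trained code seq2seq model, using Hugging Face transformers; the CodeT5 checkpoint, prompt format, and sample inputs are illustrative assumptions, not the paper's actual configuration.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Hypothetical choice of a pre-trained code model to fine-tune / prompt.
tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

satd_comment = "// TODO: handle null response properly"
code = "String body = response.getBody(); return body.trim();"

# The SATD comment is fed alongside the code: the study reports that
# removal quality drops when this comment is withheld from the model.
inputs = tok(f"{satd_comment} {code}", return_tensors="pt", truncation=True)

# Several attempts per instance, mirroring the paper's multi-attempt setup.
candidates = model.generate(**inputs, max_new_tokens=128,
                            num_beams=5, num_return_sequences=5)
for seq in candidates:
    print(tok.decode(seq, skip_special_tokens=True))
```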